Random Search Algorithms
Author
Department of Industrial and Systems Engineering, University of Washington, Seattle, WA 98195–2650, USA, [email protected]
Abstract
Random search algorithms are useful for many ill-structured global optimization problems with continuous and/or discrete variables. Typically, random search algorithms sacrifice a guarantee of optimality in exchange for finding a good solution quickly, with convergence results stated in probability. Random search algorithms include simulated annealing, tabu search, genetic algorithms, evolutionary programming, particle swarm optimization, ant colony optimization, cross-entropy, stochastic approximation, multistart, and clustering algorithms, to name a few. They may be categorized as global (exploration) versus local (exploitation) search, or as instance-based versus model-based. However, one feature these methods share is the use of probability in determining their iterative procedures. This article provides an overview of these random search algorithms, with a probabilistic view that ties them together.

1 Introduction

A random search algorithm refers to an algorithm that uses some kind of randomness or probability (typically in the form of a pseudo-random number generator) in the definition of the method; in the literature it may also be called a Monte Carlo method or a stochastic algorithm. The term metaheuristic is also commonly associated with random search algorithms. Simulated annealing, tabu search, genetic algorithms, evolutionary programming, particle swarm optimization, ant colony optimization, cross-entropy, stochastic approximation, multistart, clustering algorithms, and other random search methods are being widely applied to continuous and discrete global optimization problems; see, for example, [7, 8, 24, 45, 47, 64, 67, 73].

Random search algorithms are useful for ill-structured global optimization problems, where the objective function may be nonconvex, nondifferentiable, and possibly discontinuous over a continuous, discrete, or mixed continuous-discrete domain. A global optimization problem with continuous variables may contain several local optima or stationary points. A problem with discrete variables falls into the category of combinatorial optimization and is often typified by the Traveling Salesperson Problem (TSP). A combination of continuous and discrete variables arises in many complex systems, including engineering design problems, scheduling and sequencing problems, and other applications in biological and economic systems. The problem of designing algorithms that obtain global optimal solutions is very difficult when there is no overriding structure indicating whether a local solution is indeed a global solution.

In contrast to deterministic methods (such as branch and bound, interval analysis, and tunnelling methods [27, 28, 45]), which typically guarantee asymptotic convergence to the optimum, random search algorithms ensure convergence in probability; the tradeoff is in terms of computational effort. Random search algorithms are popular because they can provide a relatively good solution quickly and easily, and they have shown the potential to solve large-scale problems efficiently in a way that is not possible for deterministic algorithms. Whereas deterministic global optimization is known to be NP-hard [69], there is evidence that a stochastic algorithm can be executed in polynomial time on average. For instance, Dyer and Frieze [21, 22] showed that estimating the volume of a convex body requires an exponential number of function evaluations for any deterministic algorithm, whereas a stochastic algorithm can provide, in polynomial time, an estimate that is correct with high probability.
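As a concrete illustration of how randomness enters the iterative procedure, the following is a minimal sketch of pure random search over a box-constrained continuous domain, using only black-box function evaluations and a pseudo-random number generator. The objective function, bounds, and iteration budget are illustrative assumptions, not details from the article; pure random search itself is analyzed in Section 2.

```python
# Minimal sketch of pure random search on a box-constrained continuous domain.
# The objective `sphere`, the bounds, and the iteration budget are illustrative.
import random


def sphere(x):
    """Illustrative black-box objective; only function values are observed."""
    return sum(xi ** 2 for xi in x)


def pure_random_search(f, lower, upper, n_iterations=10000, seed=0):
    """Sample uniformly over the box [lower, upper] and retain the best point seen."""
    rng = random.Random(seed)                # pseudo-random number generator
    best_x, best_f = None, float("inf")
    for _ in range(n_iterations):
        x = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        fx = f(x)                            # black-box function evaluation only
        if fx < best_f:                      # record-keeping: retain the incumbent
            best_x, best_f = x, fx
    return best_x, best_f


if __name__ == "__main__":
    x_best, f_best = pure_random_search(sphere, lower=[-5.0] * 3, upper=[5.0] * 3)
    print(f"best value found: {f_best:.4f} at {x_best}")
```

Under mild conditions on the objective, the best value recorded by such a sampler converges in probability to the global optimum as the number of iterations grows, which is the sense of convergence discussed above.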
Another advantage of random search methods is that they are relatively easy to implement on complex problems with “black-box” function evaluations. Because the methods typically rely only on function evaluations, rather than on gradient and Hessian information, they can be coded quickly and applied to a broad class of global optimization problems. A disadvantage of these methods is that they are currently customized to each specific problem largely through trial and error. A common experience is that random search algorithms perform well and are “robust” in the sense that they give useful information quickly for ill-structured global optimization problems.

Different categorizations have been suggested for random search algorithms. Schoen [57] provides a classification of two-phase methods, in which one phase is a global search phase and the other is a local search phase. An example of a two-phase method is multistart, where a local search algorithm is initiated from a starting point that is globally, typically uniformly, distributed (see the sketch at the end of this section). Often the local phase in multistart is a deterministic gradient search algorithm, although researchers have also experimented with using simulated annealing in the local phase. The global phase can be viewed as an exploration phase aimed at exploring the entire feasible region, while the local phase can be viewed as an exploitation phase aimed at exploiting local information (e.g., gradient or nearest-neighbor information).

Zlochin [77] categorizes random search algorithms as instance-based or model-based: instance-based methods generate new candidate points based on the current point or population of points, whereas model-based methods rely on an explicit sampling distribution and update the parameters of that probability distribution. Examples of instance-based algorithms include simulated annealing, genetic algorithms, and tabu search, whereas examples of model-based algorithms include ant colony optimization, stochastic gradient search, and the cross-entropy method.

To understand the differences between random search algorithms, we concentrate on the procedures for generating new candidate points and for updating the collection of points maintained. The next section describes generating and updating procedures for several random search algorithms. Even though instance-based methods may use heuristics to generate and update points, they implicitly induce a sampling distribution, and abstracting random search algorithms by a sequence of sampling distributions (whether implicit or explicit) provides a means to understand their performance. In Section 2 we analyze the performance of Pure Random Search, Pure Adaptive Search, and Annealing Adaptive Search.
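The two-phase structure of multistart described above can be made concrete with a short sketch. Here the global phase draws starting points uniformly over a box-constrained feasible region, and a deterministic compass (pattern) search stands in for the gradient-based local phase mentioned above; the multimodal test objective, bounds, and number of starts are illustrative assumptions rather than details from the article.

```python
# Rough sketch of multistart as a two-phase method: a global (exploration) phase
# that samples uniform starting points, and a local (exploitation) phase run from
# each start.  The compass search and the test objective are illustrative choices.
import math
import random


def multimodal(x):
    """Illustrative objective with many local minima (Rastrigin-like)."""
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)


def compass_search(f, x0, lower, upper, step=0.5, tol=1e-4):
    """Deterministic local phase: coordinate moves with a shrinking step size."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                y = list(x)
                y[i] = min(max(y[i] + delta, lower[i]), upper[i])  # stay inside the box
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5          # no improving coordinate move: refine the step size
    return x, fx


def multistart(f, lower, upper, n_starts=30, seed=0):
    """Global phase: uniform starting points; local phase run from each start."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x0 = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]  # exploration
        x, fx = compass_search(f, x0, lower, upper)                 # exploitation
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f


if __name__ == "__main__":
    x_best, f_best = multistart(multimodal, lower=[-5.12] * 2, upper=[5.12] * 2)
    print(f"best local optimum found: {f_best:.4f} at {x_best}")
```

Replacing the local phase with a gradient method or a simulated annealing run, as discussed above, changes only the exploitation step; the uniform exploration step and the record-keeping of the best local optimum found remain the same.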